Ensure we properly support structured outputs #49
Conversation
I'm running 7.0-RC2-62242 and among other models have qwen3.6:latest available, but keep getting this error:
…aces. Set timeout for text generation to 60 seconds
I had run into timeouts a few times in testing, though not consistently. That 30-second timeout comes upstream from the AI Client, but I just pushed up a fix that changes the default to 60 seconds when using this provider. That seemed high enough in my testing to never hit timeouts, but let me know if you're still having issues.
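The timeout change described above can be sketched roughly as follows. This is a minimal illustration, not the plugin's real API: the function name, the provider key, and the option shape are all hypothetical, but it shows the idea of overriding the upstream client's 30-second default with a 60-second default for this provider, since local inference can be slow.

```python
# Hypothetical sketch of provider-specific timeout defaults.
# Names here are illustrative, not the plugin's actual code.

DEFAULT_TIMEOUT = 30   # upstream AI Client default (seconds)
OLLAMA_TIMEOUT = 60    # raised default for the Ollama provider

def request_options(provider, timeout=None):
    """Build request options, bumping the timeout for local inference."""
    if timeout is None:
        timeout = OLLAMA_TIMEOUT if provider == "ollama" else DEFAULT_TIMEOUT
    return {"timeout": timeout}

print(request_options("ollama"))            # {'timeout': 60}
print(request_options("ollama", timeout=120))  # explicit override wins
```

An explicit per-request timeout still takes precedence, so callers that already tuned their own limits are unaffected.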
Not surprisingly, now getting:
I've got a whole bunch of older models locally, but also some newer, larger ones. Perhaps I'm getting routed to a poorly performing model here?
Hmm, maybe. The issue we have right now is that there's no easy way to check which model was used in the request. If you can, install this version of the AI plugin (this is from PR 437): pr-437-f588fc861783a4ec9306ca070a51a25e40c6dfd4.zip. Then go to the AI settings page and activate the AI Request Logging experiment. This should then show a page under
Interesting. I'm using the same model but not running into this. Is it consistent across all features, or just the review notes experiment?
Updated Ollama to 0.21.1, and classification and review notes both work. Let's go!




Description of the Change
Ollama supports structured outputs for models that are capable of them, but we weren't properly parsing those responses ourselves. As a result, any API request that asked for a structured output didn't work properly.
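To illustrate the parsing step this change addresses: with Ollama's structured outputs, the schema-conforming result arrives as a JSON string inside the response's message content and must be decoded before use. The sketch below assumes a generic chat-style response payload; the sample data and helper name are illustrative, not the plugin's actual code.

```python
import json

def parse_structured_response(response):
    """Decode the JSON document embedded in a chat response's message content."""
    content = response["message"]["content"]
    return json.loads(content)

# Illustrative sample payload shaped like an Ollama chat response.
sample = {
    "message": {
        "role": "assistant",
        "content": '{"category": "review", "confidence": 0.92}',
    },
    "done": True,
}

result = parse_structured_response(sample)
print(result["category"])  # review
```

Without this decoding step, callers receive a raw string where they expect a structured object, which is why structured-output requests were failing before this change.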
How to test the Change
Changelog Entry
Credits
Props @dkotter
Checklist: